69 research outputs found

    Dissociation of first- and second-order motion systems by perceptual learning

    Published in final edited form as: Atten Percept Psychophys. 2012 Jul; 74(5): 1009–1019. doi:10.3758/s13414-012-0290-3.

    Previous studies investigating transfer of perceptual learning between luminance-defined (LD) motion and texture-contrast-defined (CD) motion tasks have found little or no transfer from LD to CD motion tasks but nearly perfect transfer from CD to LD motion tasks. Here, we introduce a paradigm that yields a clean double dissociation: LD training yields no transfer to the CD task and, more interestingly, CD training yields no transfer to the LD task. Participants were trained in two variants of a global motion task. In one (LD) variant, motion was defined by tokens that differed from the background in mean luminance. In the other (CD) variant, motion was defined by tokens whose mean luminance equaled that of the background but whose texture contrast differed. The task was to judge whether the signal tokens were moving to the right or to the left. Task difficulty was varied by manipulating the proportion of tokens that moved coherently across the four frames of the stimulus display. Performance in each of the LD and CD variants of the task was measured as training proceeded. In each task, training produced substantial improvement in performance on the trained task; in neither case, however, did this improvement show any significant transfer to the nontrained task.

    This work was supported in part by NSF Award BCS-0843897 to Dr. Chubb and in part by Award Number R01NS064100 from the National Institutes of Health, National Institute of Neurological Disorders and Stroke, to Dr. Vaina.

    Motion sequence analysis in the presence of figural cues

    Published in final edited form as: Neurocomputing. 2015 Jan 5; 147: 485–491.

    The perception of 3-D structure in dynamic sequences is believed to be subserved primarily by motion cues. However, real-world sequences contain many figural shape cues besides the dynamic ones. We hypothesize that if figural cues are perceptually significant during sequence analysis, then inconsistencies in these cues over time should lead to percepts of non-rigidity in sequences showing physically rigid objects in motion. We develop an experimental paradigm to test this hypothesis and present results from two patients with impairments in motion perception due to focal neurological damage, as well as two control subjects. Consistent with our hypothesis, the data suggest that figural cues strongly influence the perception of structure in motion sequences, even to the extent of inducing non-rigid percepts in sequences where motion information alone would yield rigid structures. Beyond helping to probe the issue of shape perception, our experimental paradigm might also serve as a perceptual assessment tool in a clinical setting.

    The authors wish to thank all observers who participated in the experiments reported here. This research and the preparation of this manuscript were supported by National Institutes of Health grant R01NS064100 to LMV.

    Neuropsychological evidence for three distinct motion mechanisms

    Published in final edited form as: Neurosci Lett. 2011 May 16; 495(2): 102–106. doi:10.1016/j.neulet.2011.03.048.

    We describe the psychophysical performance of two stroke patients with lesions in distinct cortical regions of the left hemisphere. Both patients were selectively impaired on direction discrimination in several local and global second-order, but not first-order, motion tasks. However, only patient FD was impaired on a specific bi-stable motion task in which the direction of motion is biased by object similarity. We suggest that this bi-stable motion task may be mediated by a high-level attention- or position-based mechanism with a distinct neural substrate. These results therefore provide evidence for at least three motion mechanisms in the human visual system: low-level first-order and second-order motion mechanisms and a high-level attention- or position-based mechanism.

    An Effect of Relative Motion on Trajectory Discrimination

    Psychophysical studies point to the existence of specialized mechanisms sensitive to the relative motion between an object and its background. Such mechanisms would seem ideal for the motion-based segmentation of objects; however, their properties and role in processing the visual scene remain unclear. Here we examine the contribution of relative motion mechanisms to the processing of object trajectory. In a series of four psychophysical experiments, we examine systematically the effects of relative direction and speed differences on the perceived trajectory of an object against a moving background. We show that background motion systematically influences the discrimination of object direction. Subjects’ ability to discriminate direction was consistently better for objects moving opposite a translating background than for objects moving in the same direction as the background. This effect was limited to the case of a translating background and did not affect perceived trajectory for more complex background motions associated with self-motion. We interpret these differences as supporting a role for relative motion mechanisms in the segmentation and representation of object motions that do not occlude the path of an observer’s self-motion.

    Interaction of cortical networks mediating object motion detection by moving observers

    Published in final edited form as: Exp Brain Res. 2012 Aug; 221(2): 177–189. doi:10.1007/s00221-012-3159-8.

    The task of parceling perceived visual motion into self-motion and object-motion components is critical to safe and accurate visually guided navigation. In this paper, we used functional magnetic resonance imaging to determine the cortical areas functionally active in this task and the pattern of connectivity among them, in order to investigate the cortical regions and networks that allow subjects to detect object motion separately from induced self-motion. Subjects were presented with nine textured objects during simulated forward self-motion and were asked to identify the target object, which had an additional, independent motion component toward or away from the observer. Cortical activation was distributed among occipital, intra-parietal and fronto-parietal areas. We performed a network analysis of connectivity data derived from partial correlation and multivariate Granger causality analyses among the functionally active areas. This revealed four coarsely separated network clusters: bilateral V1 and V2; visually responsive occipito-temporal areas, including bilateral LO, V3A, KO (V3B) and hMT; bilateral VIP, DIPSM and right precuneus; and a cluster of higher, primarily left-hemispheric regions, including the central sulcus, post-, pre- and sub-central sulci, pre-central gyrus, and FEF. We suggest that the visually responsive networks are involved in forming the representation of the visual stimulus, while the higher, left-hemisphere cluster mediates the interpretation of the stimulus for action. Our main focus was on the relationships among activations of the visually responsive areas during our task. To determine the properties of the mechanism corresponding to the visual processing networks, we compared subjects’ psychophysical performance to a model of object motion detection based solely on relative motion among objects and found it inconsistent with observer performance. Our results support the use of scene context (e.g., eccentricity, depth) in the detection of object motion. We suggest that the cortical activation and visually responsive networks provide a potential substrate for this computation.

    This work was supported by NIH grant R01NS064100 to L.M.V. We thank Victor Solo for discussions regarding models of functional connectivity and our subjects for participating in the psychophysical and fMRI experiments. This research was carried out in part at the Athinoula A. Martinos Center for Biomedical Imaging at the Massachusetts General Hospital, using resources provided by the Center for Functional Neuroimaging Technologies, P41RR14075, a P41 Regional Resource supported by the Biomedical Technology Program of the National Center for Research Resources (NCRR), National Institutes of Health. This work also involved the use of instrumentation supported by the NCRR Shared Instrumentation Grant Program and/or High-End Instrumentation Grant Program, specifically grant number S10RR021110.
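The abstract above derives its networks from partial correlation (alongside Granger causality). As an illustration of the partial-correlation step only, not the authors' actual pipeline, the matrix can be obtained from the inverse covariance of regional time series; the function name and the synthetic chain example below are our assumptions:

```python
import numpy as np

def partial_correlations(ts):
    """Partial correlation matrix of regional time series.

    ts: array of shape (n_samples, n_regions). Entry (i, j) is the
    correlation between regions i and j after linearly regressing out
    all other regions, read off from the inverse covariance (precision)
    matrix via the standard identity pc_ij = -P_ij / sqrt(P_ii * P_jj).
    """
    prec = np.linalg.inv(np.cov(ts, rowvar=False))
    d = np.sqrt(np.diag(prec))
    pc = -prec / np.outer(d, d)
    np.fill_diagonal(pc, 1.0)
    return pc

# Synthetic chain x -> y -> z: x and z are marginally correlated only
# through y, so their partial correlation should be near zero.
rng = np.random.default_rng(1)
x = rng.normal(size=5000)
y = x + 0.5 * rng.normal(size=5000)
z = y + 0.5 * rng.normal(size=5000)
pc = partial_correlations(np.c_[x, y, z])
```

This distinction between marginal and partial correlation is what lets such an analysis separate direct coupling between areas from coupling inherited through a shared intermediate.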

    A method for selecting an efficient diagnostic protocol for classification of perceptive and cognitive impairments in neurological patients

    Published in final edited form as: Conf Proc IEEE Eng Med Biol Soc. 2011; 2011: 1129–1132. doi:10.1109/IEMBS.2011.6090264.

    An important and unresolved problem in the assessment of perceptual and cognitive deficits in neurological patients is how to choose, from the many existing behavioral tests, a subset that is sufficient for an appropriate diagnosis. This problem arises in clinical trials, in rehabilitation settings, and often even at bedside in acute-care hospitals. The need for efficient, cost-effective and accurate diagnostic evaluations, given clinician time constraints and concerns about patient fatigue in long testing sessions, makes it imperative to select a set of tests that will provide the best classification of the patient's deficits. However, the small sample size of the patient population complicates the selection methodology and limits the potential accuracy of the classifier. We propose a method that orders tests by their progressive contribution to classification accuracy, using cross-validation to assess the classification power of the chosen test set. The method applies forward linear regression to find an ordering of the tests, with leave-one-out cross-validation to quantify, without bias toward the training set, the classification power of the chosen tests.

    (R01NS064100 - NINDS, NIH HHS)
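The procedure described in the abstract, forward selection with a linear-regression classifier scored by leave-one-out cross-validation, can be sketched as follows. This is our illustrative reading of the method, not the authors' code; the 0.5 decision threshold for the regression output and all function names are assumptions:

```python
import numpy as np

def loocv_accuracy(X, y):
    """Leave-one-out accuracy of a linear-regression classifier:
    fit least squares on n-1 rows, threshold the held-out prediction
    at 0.5 against the binary label."""
    n = len(y)
    correct = 0
    for i in range(n):
        mask = np.arange(n) != i
        Xtr = np.c_[np.ones(mask.sum()), X[mask]]  # add intercept column
        w, *_ = np.linalg.lstsq(Xtr, y[mask], rcond=None)
        pred = np.r_[1.0, X[i]] @ w
        correct += int((pred >= 0.5) == bool(y[i]))
    return correct / n

def order_tests(X, y):
    """Greedy forward selection: repeatedly add the test (column of X)
    that most improves LOOCV accuracy; returns the selection order and
    the accuracy after each addition."""
    remaining = list(range(X.shape[1]))
    chosen, scores = [], []
    while remaining:
        best = max(remaining,
                   key=lambda j: loocv_accuracy(X[:, chosen + [j]], y))
        chosen.append(best)
        remaining.remove(best)
        scores.append(loocv_accuracy(X[:, chosen], y))
    return chosen, scores

# Synthetic example: 20 patients, 3 tests; only test 0 separates groups.
rng = np.random.default_rng(0)
y = np.repeat([0.0, 1.0], 10)
X = rng.normal(size=(20, 3))
X[:, 0] += 3 * y
order, scores = order_tests(X, y)
```

Because each held-out case never touches its own fit, the per-step scores give an unbiased picture of how many tests a protocol actually needs before accuracy plateaus.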

    The emotional valence of subliminal priming affects perception of facial expressions

    We investigated, in young healthy subjects, how the affective content of subliminally presented priming images and their specific visual attributes impacted conscious perception of facial expressions. The priming images were broadly categorised as aggressive, pleasant, or neutral, and further subcategorised by the presence of a face and by the centricity (egocentric or allocentric vantage-point) of the image content. Subjects responded to the emotion portrayed in a pixelated target face by indicating via key-press whether the expression was angry or neutral. Priming images containing a face, compared to those without a face, significantly impaired performance on neutral or angry target-face evaluation. Recognition of angry target-face expressions was selectively impaired by pleasant prime images that contained a face. For egocentric primes, recognition of neutral target-face expressions was significantly better than of angry expressions. Our results suggest, first, that the affective primacy hypothesis, which predicts that affective information can be accessed automatically and precede conscious cognition, holds in subliminal priming only when the priming image contains a face. Second, egocentric primes interfere with the perception of angry target-face expressions, suggesting that this vantage-point, directly relevant to the viewer, engages processes involved in action preparation that may weaken the priority of affect processing.

    Different Motion Cues Are Used to Estimate Time-to-arrival for Frontoparallel and Looming Trajectories

    Estimation of time-to-arrival for moving objects is critical to obstacle interception and avoidance, as well as to timing actions such as reaching for and grasping moving objects. The source of motion information that conveys arrival time varies with the trajectory of the object, raising the question of whether multiple context-dependent mechanisms are involved in this computation. To address this question, we conducted a series of psychophysical studies measuring observers’ performance on time-to-arrival estimation when object trajectory was specified by angular motion (“gap closure” trajectories in the frontoparallel plane), by looming (colliding trajectories; time-to-collision, TTC), or by both (passage trajectories; time-to-passage, TTP). We measured performance on time-to-arrival judgments in the presence of irrelevant motion, in which a perpendicular motion vector was added to the object trajectory. Data were compared to models of expected performance based on the use of different components of optical information. Our results demonstrate that for gap closure, performance depended only on the angular motion, whereas for TTC and TTP, both angular and looming motion affected performance. This dissociation of inputs suggests that gap closure is mediated by a mechanism separate from that used for the detection of time-to-collision and time-to-passage. We show that existing models of TTC and TTP estimation make systematic errors in predicting subject performance, and suggest that a model that weights motion cues by their relative time-to-arrival provides a better account of performance.
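The classic looming-based estimate of time-to-collision for a head-on approach is the optical variable tau: the ratio of an object's angular size to its rate of angular expansion, computable from retinal information alone. A minimal sketch of that idea, not of the specific models tested in the paper (the function name and geometry are our assumptions):

```python
import numpy as np

def tau_from_looming(distance, speed, radius):
    """Looming-based time-to-collision estimate for a head-on approach.

    An object of physical radius `radius` at `distance` subtends a
    visual angle theta = 2*atan(radius/distance). The optical variable
    tau = theta / (d theta / dt) approximates the true time-to-collision
    distance/speed when theta is small, without needing distance or
    speed individually.
    """
    theta = 2.0 * np.arctan(radius / distance)
    # d(theta)/dt, with d(distance)/dt = -speed (object approaching)
    theta_dot = 2.0 * radius * speed / (distance**2 + radius**2)
    return theta / theta_dot

# A 0.1 m radius object 10 m away approaching at 2 m/s: true TTC is 5 s,
# and the small-angle tau estimate is very close to it.
tau = tau_from_looming(10.0, 2.0, 0.1)
```

For frontoparallel gap-closure trajectories there is essentially no expansion, so theta_dot from looming is uninformative there; the dissociation reported in the abstract is consistent with that geometric difference between the cues.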

    Long-range coupling of prefrontal cortex and visual (MT) or polysensory (STP) cortical areas in motion perception

    To investigate how, where and when moving auditory cues interact with the perception of object motion during self-motion, we conducted psychophysical, MEG, and fMRI experiments in which subjects viewed nine textured objects during simulated forward self-motion. On each trial, one object was randomly assigned its own looming motion within the scene. Subjects reported which of four labeled objects had independent motion within the scene in two conditions: (1) visual information only, and (2) with an additional moving auditory cue. In MEG, comparison of the two conditions showed that (i) MT activity is similar across conditions; (ii) late after stimulus presentation there is additional activity in the auditory-cue condition ventral to MT; (iii) with the auditory cue, the right auditory cortex (AC) shows early activity together with STS; (iv) these two activities have different time courses, and the STS signals occur later in the epoch together with frontal activity in the right hemisphere; and (v) in the visual-only condition, activity in posterior parietal cortex (PPC) is stronger than in the auditory-cue condition. fMRI conducted for the visual-only condition reveals activations in a network of parietal and frontal areas and in MT. In addition, dynamic Granger causality analysis showed, for auditory cues, a strong connection of AC with the superior temporal polysensory (STP) area but not with MT, suggesting binding of visual and auditory information at STP. Also, while in the visual-only condition PFC is connected with MT, in the auditory-cue condition PFC is connected to STP. These results indicate that PFC allocates attention to the “object” as a whole: to a moving visual-auditory object in STP, and to a moving visual object in MT.

    Two mechanisms for optic flow and scale change processing of looming

    Published in final edited form as: J Vis. 11(3). doi:10.1167/11.3.5.

    The detection of looming, the motion of objects in depth, underlies many behavioral tasks, including the perception of self-motion and time-to-collision. A number of studies have demonstrated that one of the most important cues for looming detection is optic flow, the pattern of motion across the retina. Schrater et al. have suggested that changes in spatial frequency over time, or scale changes, may also support looming detection in the absence of optic flow (P. R. Schrater, D. C. Knill, & E. P. Simoncelli, 2001). Here we used an adaptation paradigm to determine whether the perception of looming from optic flow and from scale changes is mediated by a single mechanism or by separate mechanisms. We show first that when the adaptation and test stimuli were the same (both optic flow or both scale change), observer performance was significantly impaired compared to a dynamic (non-motion, non-scale-change) null adaptation control. Second, we found no evidence of cross-cue adaptation, either from optic flow to scale change or vice versa. Taken together, our data suggest that optic flow and scale changes are processed by separate mechanisms, providing multiple pathways for the detection of looming.

    We thank Jonathan Victor and the anonymous reviewers of the paper for feedback and suggestions regarding the stimuli used here. This work was supported by NIH grant R01NS064100 to LMV.